Search for: All records

Creators/Authors contains: "Schwartz, Edward"


  1. Abstract: Topological spin textures (e.g., skyrmions) stabilized by interfacial Dzyaloshinskii-Moriya interaction (DMI) in magnetic multilayers have been intensively studied. Recently, Bloch-type magnetic skyrmions stabilized by composition-gradient-induced DMI (g-DMI) have been observed in a 10-nm-thick CoPt single layer. However, magnetic anisotropy in gradient-composition-engineered CoPt (g-CoPt) films is highly sensitive to both the relative Co/Pt composition and the film thickness, leading to a complex interplay with g-DMI. The stability of skyrmions under the combined influence of magnetic anisotropy and g-DMI is crucial yet remains poorly understood. Here, we conduct a systematic study of the characteristics of magnetic skyrmions as a function of gradient polarity and effective gradient (defined as gradient/thickness) in g-CoPt single layers (10–30 nm thick) using magnetic force microscopy (MFM), bulk magnetometry, and topological Hall effect measurements. Brillouin light scattering spectroscopy confirms that both the sign and magnitude of g-DMI depend on the polarity and amplitude of the composition gradient in g-CoPt films. MFM reveals that skyrmion size and density vary with g-CoPt film thickness, gradient polarity, and applied magnetic field. An increased skyrmion density is observed in samples exhibiting higher magnetic anisotropy, in agreement with micromagnetic simulations and energy barrier calculations. (A sketch of the micromagnetic energy terms involved appears after this list.)
    Free, publicly-accessible full text available July 26, 2026
  2. Abstract: Field-free switching of perpendicular magnetization has been observed in an epitaxial $L1_1$-ordered CoPt/CuPt bilayer and attributed to spin-orbit torque (SOT) arising from the crystallographic $3m$ point group of the interface. Using a first-principles nonequilibrium Green's function formalism combined with the Anderson disorder model, we calculate the angular dependence of the SOT in a CoPt/CuPt bilayer and find that the magnitude of the $3m$ SOT is about 20% of the conventional dampinglike SOT. We further study the magnetization dynamics in perpendicularly magnetized films in the presence of the $3m$ SOT and the Dzyaloshinskii-Moriya interaction (DMI), using the equations of motion for domain wall dynamics and micromagnetic simulations. We find that for systems with strong interfacial DMI, characterized by the Néel character of domain walls, a very large current density is required to achieve deterministic switching, because reorientation of the magnetization inside the domain wall is necessary to induce the switching asymmetry. For thicker films with relatively weak interfacial DMI and Bloch-like domain walls, deterministic switching is possible with much smaller currents, which agrees with recent experimental findings. (A sketch of the torque terms entering such dynamics appears after this list.)
  3. Free, publicly-accessible full text available November 12, 2025
  4. Variable names are critical for conveying intended program behavior. Machine learning-based program analysis methods use variable name representations for a wide range of tasks, such as suggesting new variable names and bug detection. Ideally, such methods could capture semantic relationships between names beyond syntactic similarity, e.g., the fact that the names average and mean are similar. Unfortunately, previous work has found that even the best prior representation approaches primarily capture "relatedness" (whether two variables are linked at all) rather than "similarity" (whether they actually have the same meaning). We propose VarCLR, a new approach for learning semantic representations of variable names that effectively captures variable similarity in this stricter sense. We observe that this problem is an excellent fit for contrastive learning, which aims to minimize the distance between explicitly similar inputs while maximizing the distance between dissimilar inputs. This requires labeled training data, so we construct a novel, weakly supervised variable renaming dataset mined from GitHub edits. We show that VarCLR enables the effective application of sophisticated, general-purpose language models such as BERT to variable name representation, and thus also to related downstream tasks like variable name similarity search or spelling correction. VarCLR produces models that significantly outperform the state of the art on IdBench, an existing benchmark that explicitly captures variable similarity (as distinct from relatedness). Finally, we contribute a release of all data, code, and pre-trained models, aiming to provide a drop-in replacement for variable representations used in existing or future program analyses that rely on variable names. (A minimal sketch of the contrastive objective appears after this list.)
  5. The decompiler is one of the most common tools for examining executable binaries without the corresponding source code. It transforms binaries into high-level code, reversing the compilation process. Unfortunately, decompiler output is far from readable because the decompilation process is often incomplete. State-of-the-art techniques use machine learning to predict missing information like variable names. While these approaches can often suggest good variable names in context, no existing work examines how the selection of training data influences these machine learning models. We investigate how data provenance and the quality of training data affect performance, and how well, if at all, trained models generalize across software domains. We focus on the variable renaming problem using one such machine learning model, DIRE. We first describe DIRE in detail and the accompanying technique used to generate training data from raw code. We also evaluate DIRE's overall performance without respect to data quality. Next, we show how training on more popular, possibly higher-quality code (measured using GitHub stars) leads to a more generalizable model, because popular code tends to have more diverse variable names. Finally, we evaluate how well DIRE predicts domain-specific identifiers, propose a modification to incorporate domain information, and show that it can predict identifiers in domain-specific scenarios 23% more frequently than the original DIRE model. (A sketch of star-based data selection appears after this list.)
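
Sketch for record 1: the abstract compares MFM observations against micromagnetic simulations and energy-barrier calculations. As a hedged illustration, a minimal energy density of the kind typically minimized in such simulations is written below. The bulk-like form of the DMI term is an assumption chosen because it favors the Bloch-type skyrmions reported, and the dependence of D on the effective gradient simply restates the abstract's definition.

E = \int \big[ A (\nabla \mathbf{m})^2 - K_u m_z^2 + D\, \mathbf{m} \cdot (\nabla \times \mathbf{m}) - \mu_0 M_s\, \mathbf{m} \cdot \mathbf{H} \big] \, dV, \qquad D = D(g_{\mathrm{eff}}), \quad g_{\mathrm{eff}} = \mathrm{gradient}/t

Here A is the exchange stiffness, K_u the perpendicular anisotropy, and t the film thickness. Per the abstract, the sign of D tracks the gradient polarity, and samples with larger K_u show higher skyrmion density.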
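Sketch for record 2: domain-wall and micromagnetic dynamics of this kind are usually integrated from the Landau-Lifshitz-Gilbert (LLG) equation with SOT terms. A hedged phenomenological form is sketched below; the $\cos 3\varphi$ angular dependence of the $3m$ term is the commonly quoted symmetry-allowed form for a $3m$ interface, not necessarily the exact tensor structure computed in the paper.

\frac{d\mathbf{m}}{dt} = -\gamma\, \mathbf{m} \times \mathbf{H}_{\mathrm{eff}} + \alpha\, \mathbf{m} \times \frac{d\mathbf{m}}{dt} + \tau_{\mathrm{DL}}\, \mathbf{m} \times (\hat{\boldsymbol{\sigma}} \times \mathbf{m}) + \tau_{3m} \cos 3\varphi\; \mathbf{m} \times (\hat{\mathbf{z}} \times \mathbf{m})

Here $\hat{\boldsymbol{\sigma}}$ is the conventional spin polarization direction, $\varphi$ the in-plane magnetization angle measured from the crystal axes, and, per the abstract, $|\tau_{3m}| \approx 0.2\, |\tau_{\mathrm{DL}}|$. The $3m$ term supplies the out-of-plane asymmetry that makes field-free deterministic switching possible.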
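Sketch for record 4 (VarCLR): a minimal, self-contained illustration of the contrastive objective the abstract describes, pulling embeddings of explicitly similar names together while pushing in-batch negatives apart. The toy character-level encoder and the example pairs are hypothetical; the real VarCLR uses a BERT-style model trained on its GitHub-mined renaming dataset.

# Minimal sketch of a contrastive (InfoNCE) objective over variable-name
# pairs. All class and variable names here are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CharEncoder(nn.Module):
    """Toy character-level encoder mapping a name to a unit-norm embedding."""
    def __init__(self, vocab_size=128, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)

    def forward(self, ids):                          # ids: (batch, seq_len)
        out, _ = self.rnn(self.embed(ids))
        return F.normalize(out.mean(dim=1), dim=-1)  # (batch, dim)

def info_nce(z1, z2, temperature=0.07):
    """Pull each positive pair (z1[i], z2[i]) together; push z1[i] away
    from every other name in the batch (the negatives)."""
    logits = z1 @ z2.t() / temperature               # (batch, batch)
    targets = torch.arange(z1.size(0))               # positives on diagonal
    return F.cross_entropy(logits, targets)

def encode(names, enc, max_len=16):
    """Encode a list of names as padded character-id tensors."""
    ids = torch.zeros(len(names), max_len, dtype=torch.long)
    for i, name in enumerate(names):
        for j, ch in enumerate(name[:max_len]):
            ids[i, j] = min(ord(ch), 127)
    return enc(ids)

enc = CharEncoder()
# Positive pairs as might be mined from GitHub rename edits:
left  = encode(["average", "cnt"], enc)
right = encode(["mean", "count"], enc)
loss = info_nce(left, right)
loss.backward()   # training drives "average" and "mean" to embed nearby

The design point the abstract makes is that similarity (same meaning) rather than mere relatedness is exactly what this loss optimizes, since only explicitly equivalent renames serve as positives.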
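Sketch for record 5 (DIRE study): the abstract's data-selection idea, filtering training examples by GitHub stars as a popularity/quality proxy and checking that the kept slice has more diverse variable names. The record schema, repository names, and numbers are invented for illustration and are not from the paper's dataset.

# Hypothetical sketch of star-based training-data selection.
from collections import Counter

corpus = [
    {"repo": "a/leftpad", "stars": 3,     "names": ["s", "n", "s", "i"]},
    {"repo": "b/redis",   "stars": 60000, "names": ["dict", "entry", "cursor", "bucket"]},
    {"repo": "c/curl",    "stars": 30000, "names": ["handle", "url", "buffer", "timeout"]},
]

def select_popular(corpus, min_stars=1000):
    """Keep training examples only from repos above a star threshold."""
    return [rec for rec in corpus if rec["stars"] >= min_stars]

def name_diversity(records):
    """Unique names per name occurrence; higher means more diverse."""
    counts = Counter(n for rec in records for n in rec["names"])
    total = sum(counts.values())
    return len(counts) / total if total else 0.0

popular = select_popular(corpus)
print(f"all:     {name_diversity(corpus):.2f}")    # toy data: 0.92
print(f"popular: {name_diversity(popular):.2f}")   # toy data: 1.00

On this toy corpus the popular slice has strictly higher name diversity, which is the mechanism the abstract credits for the more generalizable model.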